734 research outputs found

    Low Latency Low Loss Media Delivery Utilizing In-Network Packet Wash

    This paper presents new techniques and mechanisms for carrying streams of layered video encoded with Scalable Video Coding (SVC) from servers to clients, utilizing the Packet Wash mechanism of the Big Packet Protocol (BPP). BPP was designed to handle the transfer of packets for high-bandwidth, low-latency applications, aiming to overcome a number of issues that current networks have with high-precision services. One of the most important advantages of BPP is that it allows the dynamic adaptation of packets during transmission. BPP uses Packet Wash to reduce the payload, and hence the size, of a packet by eliminating specific chunks. For video, this means cutting out specific segments of the transferred video, rather than dropping packets, as happens with UDP-based transmission, or retransmitting packets, as happens with TCP. This chunk-elimination approach is well matched to SVC video, and the techniques and mechanisms that exploit it are presented. An evaluation of the performance is provided, together with a comparison against UDP and TCP, the other common approaches for carrying media over IP. Our main contribution is the mapping of SVC video into BPP packets to provide low-latency, low-loss delivery, which yields better QoE performance than either UDP or TCP. This approach has proved to be an effective way to enhance the performance of video streaming applications, obtaining continuous delivery while maintaining guaranteed quality at the receiver. In this work we have successfully used an H.264 SVC encoded video for layered video transmission over BPP, and can demonstrate video delivery with low latency and low loss in limited-bandwidth environments.
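    The chunk-elimination idea can be illustrated with a small sketch. The chunk layout, layer numbering, sizes, and byte budget below are invented for illustration and are not taken from the BPP specification; the key point is that enhancement layers are trimmed before the base layer is ever touched, so the receiver always gets a decodable (if lower-quality) frame rather than a lost packet.

    ```python
    # Hypothetical sketch of BPP-style Packet Wash on an SVC packet.
    # Chunk fields and sizes are illustrative, not from the BPP spec.

    def packet_wash(chunks, budget):
        """Trim enhancement-layer chunks (highest layer first) until the
        packet fits the byte budget; the base layer is never removed."""
        kept = list(chunks)
        for layer in sorted({c["layer"] for c in kept}, reverse=True):
            if layer == 0:
                break  # never wash the base layer
            while (sum(c["size"] for c in kept) > budget
                   and any(c["layer"] == layer for c in kept)):
                # drop the last remaining chunk of this layer
                for i in range(len(kept) - 1, -1, -1):
                    if kept[i]["layer"] == layer:
                        del kept[i]
                        break
        return kept

    packet = [
        {"layer": 0, "size": 400},  # SVC base layer
        {"layer": 1, "size": 300},  # first enhancement layer
        {"layer": 2, "size": 300},  # second enhancement layer
    ]
    # With a 750-byte budget, only the top enhancement chunk is washed.
    washed = packet_wash(packet, budget=750)
    ```

    A UDP drop would have lost all three chunks; the wash keeps the base and first enhancement layers intact.
    
    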

    The Impact of Encoding and Transport for Massive Real-time IoT Data on Edge Resource Consumption

    Edge microservice applications are becoming a viable solution for the execution of real-time IoT analytics, due to their rapid response and reduced latency. With Edge Computing, unlike the central Cloud, the amount of available resources is constrained and the computation that can be undertaken is also limited. Microservices are not standalone; they are devised as a set of cooperating tasks that are fed data over the network through specific APIs. The cost of processing these feeds of data in real time, especially for massive IoT configurations, is however generally overlooked. In this work we evaluate the cost of dealing with thousands of sensors sending data to the edge with the commonly used encoding of JSON over REST interfaces, and compare this to other mechanisms that use binary encodings as well as streaming interfaces. The choice has a big impact on the microservice implementation: a wrong selection can lead to excessive resource consumption, because a less efficient encoding and transport mechanism results in much higher resource requirements, even to do an identical job.
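    The encoding overhead the abstract refers to is easy to see in miniature. The sketch below compares the wire size of one sensor reading as JSON against a fixed binary layout; the field names and the particular binary layout are invented for illustration, not taken from the paper.

    ```python
    # Illustrative size comparison: JSON vs a fixed binary encoding
    # for a single sensor reading (field names are assumptions).
    import json
    import struct

    reading = {"sensor_id": 1017, "temperature": 21.5, "humidity": 48.2}

    json_bytes = json.dumps(reading).encode("utf-8")

    # Binary layout: 32-bit unsigned id + two 32-bit floats = 12 bytes.
    binary_bytes = struct.pack("<Iff", reading["sensor_id"],
                               reading["temperature"], reading["humidity"])

    print(len(json_bytes), len(binary_bytes))
    ```

    At thousands of readings per second, the several-fold difference in payload size translates directly into parsing CPU and network cost at the edge.
    
    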

    End-to-end slices to orchestrate resources and services in the cloud-to-edge continuum

    Fog computing, combined with traditional cloud computing, offers an inherently distributed infrastructure – referred to as the cloud-to-edge continuum – that can be used for the execution of low-latency and location-aware IoT services. The management of such an infrastructure is complex: resources in multiple domains need to be accessed by several tenants, while an adequate level of isolation and performance has to be guaranteed. This paper proposes the dynamic allocation of end-to-end slices to perform the orchestration of resources and services in such a scenario. These end-to-end slices require a unified resource management approach that encompasses both data centre and network resources. Currently, fog orchestration is mainly focused on the management of compute resources, while the slicing domain is centred solely on the creation of isolated network partitions. A unified resource orchestration strategy, able to integrate the selection, configuration and management of compute and network resources as part of a single abstracted object, is missing. This work aims to minimise that silo effect, and proposes end-to-end slices as the foundation for the comprehensive orchestration of compute resources, network resources, and services in the cloud-to-edge continuum, as well as acting as the basis for a system implementation. The concept of the end-to-end slice is formally described via a graph-based model that allows for dynamic resource discovery, selection and mapping via different algorithms and optimisation goals; and a working system is presented as the way to build slices across multiple domains dynamically, based on that model. These are independently accessible objects that abstract resources of various providers – traded via a Marketplace – with compute slices, allocated using the bare-metal cloud approach, being interconnected to each other via the connectivity of network slices. Experiments, carried out on a real testbed, demonstrate three features of the end-to-end slices: resources can be selected, allocated and controlled in a softwarised fashion; tenants can instantiate distributed IoT services on those resources transparently; and the performance of a service is not affected by the status of other slices that share the same resource infrastructure.
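    The graph-based slice model can be sketched minimally: compute nodes are vertices, network links are edges, and a slice request is embedded only if every requested vertex and edge fits the substrate. Node names, resource types, and capacities below are invented for illustration; the paper's actual model and mapping algorithms are richer.

    ```python
    # Toy sketch of an end-to-end slice as a graph over a substrate
    # infrastructure (names and capacities are assumptions).

    slice_request = {
        "nodes": {"edge-dc": {"cpu": 4}, "core-dc": {"cpu": 8}},
        "links": {("edge-dc", "core-dc"): {"bw_mbps": 500}},
    }

    infrastructure = {
        "nodes": {"edge-dc": {"cpu": 16}, "core-dc": {"cpu": 64}},
        "links": {("edge-dc", "core-dc"): {"bw_mbps": 10000}},
    }

    def can_embed(request, infra):
        """Check that every requested vertex and edge fits the substrate."""
        for node, need in request["nodes"].items():
            if infra["nodes"].get(node, {}).get("cpu", 0) < need["cpu"]:
                return False
        for link, need in request["links"].items():
            if infra["links"].get(link, {}).get("bw_mbps", 0) < need["bw_mbps"]:
                return False
        return True
    ```

    Treating compute and connectivity as one graph object is what lets a single check (or optimiser) replace today's separate compute-orchestration and network-slicing silos.
    
    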

    Information Exchange Management as a Service for Network Function Virtualization Environments

    The Internet landscape is gradually adopting new communication paradigms characterized by flexibility and adaptability to resource constraints and service requirements, including network function virtualization (NFV), software-defined networks, and various virtualization and network slicing technologies. These approaches rely on multiple management and network entities exchanging information with each other. We propose a novel information exchange management as a service facility as an extension to ETSI's NFV management and orchestration framework, namely the virtual infrastructure information service (VIS). VIS is characterized by the following properties: 1) it exhibits the dynamic characteristics of such network paradigms; 2) it supports information flow establishment, operation, and optimization; and 3) it provides logically centralized control of the established information flows with respect to the diverse demands of the entities exchanging information elements. Our proposal addresses the information exchange management requirements of NFV environments and is information-model agnostic. This paper includes an experimental analysis of its main functional and non-functional characteristics.
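    The flavour of such an information-exchange service can be conveyed with a minimal publish/subscribe sketch: producers publish information elements on topics, and a logically centralized service routes them to the subscribed consumers. Class and topic names here are invented for illustration and are not the VIS API.

    ```python
    # Minimal sketch of a logically centralized information-exchange
    # service (names are assumptions, not the VIS interface).
    from collections import defaultdict

    class InfoExchange:
        def __init__(self):
            self.subscribers = defaultdict(list)

        def subscribe(self, topic, callback):
            """Register a consumer callback for a topic."""
            self.subscribers[topic].append(callback)

        def publish(self, topic, element):
            """Deliver an information element to all topic subscribers."""
            for callback in self.subscribers[topic]:
                callback(element)

    bus = InfoExchange()
    received = []
    bus.subscribe("vnf/load", received.append)
    bus.publish("vnf/load", {"vnf": "fw-1", "cpu": 0.7})
    ```

    In the real system, the central control point is also where per-flow policies (real-time versus throughput-sensitive delivery) would be enforced.
    
    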

    A flexible information service for management of virtualized software-defined infrastructures

    There is a major shift in the Internet towards using programmable and virtualized network devices, offering significant flexibility and adaptability. New networking paradigms such as software-defined networking and network function virtualization bring networks and IT domains closer together using appropriate architectural abstractions. In this context, new and novel information management features need to be introduced. The deployed management and control entities in these environments should have a clear, and often global, view of the network environment and should exchange information in alternative ways (e.g. some may have real-time constraints, while others may be throughput sensitive). Our work addresses these two network management features. In this paper, we define the research challenges in information management for virtualized, highly dynamic environments. Along these lines, we introduce and present the design details of the virtual infrastructure information service, a new management information handling framework that (i) provides logically centralized information flow establishment, optimization, coordination, synchronization and management with respect to the diverse management and control entity demands; (ii) is designed according to the characteristics and requirements of software-defined networking and network function virtualization; and (iii) inter-operates with our own virtualized infrastructure framework. Evaluation results demonstrating the flexible and adaptable behaviour of the virtual infrastructure information service and its main operations are included in the paper. Copyright © 2016 John Wiley & Sons, Ltd.

    From Category Theory to Functional Programming: A Formal Representation of Intent

    The possibility of managing network infrastructures through software-based programmable interfaces is becoming a cornerstone in the evolution of communication networks. The Intent-Based Networking (IBN) paradigm is a novel declarative approach to network management proposed by a few Standards Developing Organizations. This paradigm offers a high-level interface for network management that abstracts the underlying network infrastructure and allows the specification of network directives using natural language. Since the IBN concept is based on a declarative approach to network management and programmability, we argue that the use of declarative programming to achieve IBN could uncover valuable insights for this new network paradigm. This paper proposes a formalization of this declarative paradigm obtained with concepts from category theory. Taking this approach to Intent, an initial implementation of this formalization is presented using Haskell, a well-known functional programming language.
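    The declarative idea can be sketched in a few lines: an intent is data describing *what* is wanted, and a pure function maps it onto concrete device-level rules, much as a functor maps one structure onto another while preserving its shape. The intent fields and rule format below are invented for illustration (the paper's formalization is in category theory and Haskell).

    ```python
    # Hedged sketch of declarative intent: a pure function maps a
    # high-level intent onto device rules (fields are assumptions).

    intent = {"allow": [("web-tier", "db-tier", 5432)]}

    def compile_intent(intent):
        """Map a declarative intent onto concrete ACL-style rules."""
        rules = []
        for src, dst, port in intent["allow"]:
            rules.append(f"permit tcp {src} -> {dst} port {port}")
        return rules
    ```

    The operator states the desired connectivity; the compilation step, not the operator, decides how the infrastructure realizes it.
    
    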

    Addressing health literacy in patient decision aids

    Methods: We reviewed the literature for evidence relevant to these two aims. When high-quality systematic reviews existed, we summarized their evidence; when reviews were unavailable, we conducted our own systematic reviews. Results: Aim 1: In an existing systematic review of PtDA trials, lower health literacy was associated with lower patient health knowledge (14 of 16 eligible studies). Fourteen studies reported practical design strategies to improve knowledge for lower health literacy patients. In our own systematic review, no studies reported on values clarity per se, but in 2 studies lower health literacy was related to higher decisional uncertainty and regret. Lower health literacy was associated with less desire for involvement in 3 studies, less question-asking in 2, and less patient-centered communication in 4 studies; its effects on other measures of patient involvement were mixed. Only one study assessed the effects of a health literacy intervention on outcomes; it showed that using video to improve the salience of health states reduced decisional uncertainty. Aim 2: In our review of 97 trials, only 3 PtDAs overtly addressed the needs of lower health literacy users. In 90% of trials, user health literacy and the readability of the PtDA were not reported. However, increases in knowledge and informed choice were reported in those studies in which health literacy needs were addressed. Conclusion: Lower health literacy affects key decision-making outcomes, but few existing PtDAs have addressed the needs of lower health literacy users. The specific effects of PtDAs designed to mitigate the influence of low health literacy are unknown. More attention to the needs of patients with lower health literacy is indicated, to ensure that PtDAs are appropriate for lower as well as higher health literacy patients.

    Orchestration: The need for System and Language Abstractions

    This talk is focused on people in the arenas of orchestration and higher-level management, where they are attempting the roll-out of SDN, NFV, and SFC. • It is a talk with no measurements and no experiments! • It is not really about right / wrong; rather, there are opportunities to use working and well-tested concepts. • This is a perspective from the viewpoint of operating systems and programming languages. • We need to encourage people to design / build / utilize more in the area of abstractions, layering, and separation of concerns, by showing the successes in other areas.

    On the Placement of Management and Control Functionality in Software Defined Networks

    In order to support reactive and adaptive operations, Software-Defined Networking (SDN)-based management and control frameworks call for decentralized solutions. A key challenge to consider when deploying such solutions is to decide on the degree of distribution of the management and control functionality. In this paper, we develop an approach to determine the allocation of management and control entities by designing two algorithms to compute their placement. The algorithms rely on a set of input parameters which can be tuned to take into account the requirements of both the network infrastructure and the management applications to execute in the network. We evaluate the influence of these parameters on the configuration of the resulting management and control planes based on real network topologies and provide guidelines regarding the settings of the proposed algorithms
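    A common way to frame such placement problems is as a facility-location / k-center computation: choose controller sites so that the worst node-to-controller latency stays small. The greedy farthest-point sketch below is one standard heuristic of this kind; the latency matrix and seeding choice are invented for illustration and are not the paper's algorithms or inputs.

    ```python
    # Illustrative greedy (farthest-point) sketch of controller
    # placement; distances are assumptions, not the paper's data.

    def greedy_placement(latency, k):
        """latency[i][j]: latency between nodes i and j; pick k sites
        so the maximum node-to-nearest-site latency shrinks greedily."""
        n = len(latency)
        chosen = [0]  # seed with node 0 (arbitrary choice)
        while len(chosen) < k:
            # pick the node currently farthest from its nearest site
            farthest = max(range(n),
                           key=lambda v: min(latency[v][c] for c in chosen))
            chosen.append(farthest)
        return chosen

    latency = [
        [0, 2, 9, 10],
        [2, 0, 6, 4],
        [9, 6, 0, 3],
        [10, 4, 3, 0],
    ]
    sites = greedy_placement(latency, k=2)
    ```

    Tuning inputs such as the latency matrix, k, or per-application weights is exactly the kind of parameter sensitivity the paper's evaluation explores.
    
    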

    In-Network Video Quality Adaption using Packet Trimming at the Edge

    This paper describes the effects of running in-network quality adaptation by trimming the packets of layered video streams at the edge. The video stream is transmitted using the BPP transport protocol, which is like UDP but has been designed both to be amenable to trimming and to provide low latency and high reliability. The traffic adaptation uses the Packet Wash process of BPP on the transmitted Scalable Video Coding (SVC) video streams as they pass through a BPP-aware network function embedded at the edge. Our previous work has demonstrated either the use of SDN controllers to implement Packet Wash directly, or the use of a network function in the core of the network to do the same task. This paper presents the first attempt to deploy and evaluate such a process at the edge. We compare the performance of transmitting video using BPP with Packet Wash trimming against alternative transmission schemes, namely TCP, UDP, and HTTP Adaptive Streaming (HAS). The results demonstrate that in-network quality adaptation using packet trimming provides high quality at the receiver.